8 research outputs found

    Minimizing the Multi-view Stereo Reprojection Error for Triangular Surface Meshes

    Get PDF
    This article proposes a variational multi-view stereo vision method based on meshes for recovering 3D scenes (shape and radiance) from images. Our method is based on generative models and minimizes the reprojection error (the difference between the observed images and the images synthesized from the reconstruction). Our contributions are twofold. 1) For the first time, we rigorously compute the gradient of the reprojection error for non-smooth surfaces defined by discrete triangular meshes. The gradient correctly takes into account the visibility changes that occur when the surface moves; this forces the contours generated by the reconstructed surface to match the apparent contours in the input images exactly. 2) We propose an original modification of the Lambertian model to take into account deviations from the constant-brightness assumption without explicitly modelling the reflectance properties of the scene or other photometric phenomena introduced by the camera model. Our method is thus able to recover the shape and the diffuse radiance of non-Lambertian scenes.
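    As a rough, hedged illustration of the idea of minimizing a reprojection error by gradient descent, the sketch below recovers the depth of a textured fronto-parallel plane observed by two assumed pinhole cameras. It does not reproduce the paper's contributions (the triangular-mesh representation and the visibility-aware gradient); the intrinsics, camera positions, radiance function and step size are all illustrative assumptions.

```python
# Minimal sketch: gradient descent on a photometric reprojection error for a toy
# scene. The unknown is a single depth parameter; visibility changes and the mesh
# parameterization from the paper are not modeled. All constants are assumptions.
import numpy as np

f, cx, cy = 500.0, 320.0, 240.0          # assumed pinhole intrinsics
z_true = 5.0                             # ground-truth depth of the planar scene
cams = [0.0, 0.5]                        # camera centres at (cam_x, 0, 0)

def radiance(x, y):
    """Scene radiance on the plane: a simple intensity ramp."""
    return 0.5 + 0.2 * x + 0.1 * y

def project(x, y, z, cam_x):
    """Pinhole projection of plane points (x, y, z) into the camera at cam_x."""
    return f * (x - cam_x) / z + cx, f * y / z + cy

def observed(u, v, cam_x):
    """Observed image intensity at pixel (u, v): back-project onto the true plane."""
    x = cam_x + (u - cx) * z_true / f
    y = (v - cy) * z_true / f
    return radiance(x, y)

rng = np.random.default_rng(0)
xs, ys = rng.uniform(-1.0, 1.0, size=(2, 200))   # samples on the hypothesized surface

def reprojection_error(z):
    """Mean squared difference between synthesized and observed intensities."""
    return sum(np.mean((radiance(xs, ys) - observed(*project(xs, ys, z, c), c)) ** 2)
               for c in cams)

# Plain gradient descent on the depth, with a central finite-difference gradient
# (the paper instead derives the gradient analytically, including visibility terms).
z, step, eps = 3.0, 50.0, 1e-4
for _ in range(200):
    g = (reprojection_error(z + eps) - reprojection_error(z - eps)) / (2 * eps)
    z -= step * g

print(f"recovered depth: {z:.3f}  (ground truth {z_true})")
```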

    Virtual camera synthesis for soccer game replays

    Get PDF
    In this paper, we present a set of tools developed during the creation of a platform for the automatic generation of virtual views in a live soccer game production. Observing the scene through a multi-camera system, a 3D approximation of the players is computed and used to synthesize virtual views. The system is suitable both for static scenes, to create bullet-time effects, and for video applications, where the virtual camera moves as the game unfolds.
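    The basic geometric step behind rendering such a virtual view is projecting the 3D scene approximation into a freely placed virtual pinhole camera. The sketch below illustrates only that step, not the paper's synthesis pipeline; the intrinsics, camera pose and player positions are made-up assumptions.

```python
# Minimal sketch: project approximate 3D player positions into a virtual pinhole
# camera placed anywhere around the pitch. All numbers are illustrative assumptions.
import numpy as np

def look_at(center, target, up=(0.0, 0.0, 1.0)):
    """World-to-camera rotation for a camera at `center` looking at `target`."""
    forward = np.asarray(target, float) - np.asarray(center, float)
    forward /= np.linalg.norm(forward)
    right = np.cross(forward, up)
    right /= np.linalg.norm(right)
    down = np.cross(forward, right)          # image y axis points downwards
    return np.stack([right, down, forward])  # rows: camera x, y, z axes

def project(points_w, K, R, center):
    """Project Nx3 world points with intrinsics K and pose (R, center)."""
    p_cam = (R @ (points_w - center).T).T
    p_img = (K @ p_cam.T).T
    return p_img[:, :2] / p_img[:, 2:3]

# Approximate player positions (world frame in metres, z up), e.g. billboard anchors.
players = np.array([[10.0, 5.0, 0.9], [25.0, -8.0, 0.9], [40.0, 12.0, 0.9]])

K = np.array([[1200.0, 0.0, 960.0],
              [0.0, 1200.0, 540.0],
              [0.0, 0.0, 1.0]])
virtual_center = np.array([0.0, -30.0, 8.0])           # a camera behind the touchline
R = look_at(virtual_center, target=[30.0, 0.0, 1.0])

print(np.round(project(players, K, R, virtual_center), 1))   # pixel positions
```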

    Contributions à l'approche bayésienne pour la stéréovision multi-vues

    No full text
    Multi-view stereo is the problem of recovering the shape of objects from multiple images taken from different but known camera positions. It is an inverse problem where we want to find the cause (the object) given the effect (the images). From a Bayesian perspective, the solution is the reconstruction that best reproduces the input images while remaining plausible a priori. Taking this approach, in this thesis we develop generative models and methods for computing reconstructions that minimize the difference between the observed images and the images synthesized from the reconstruction. Three models are presented. The first represents the reconstructed scene by a set of depth maps. This gives high-resolution results, but has problems at object boundaries. The second model represents the scene by a discrete occupancy grid, yielding a combinatorial optimization problem that is addressed through message-passing techniques. The final model represents the scene by a smooth surface, and the resulting optimization problem is solved via gradient-descent surface evolution. In all three models, the main difficulty is to correctly take occlusions into account. Modelling self-occlusions results in optimization problems that challenge current optimization techniques. In this respect, the main result of the thesis is the computation of the derivative of the reprojection error with respect to surface variations that takes into account the visibility changes occurring as the surface moves. This enables the use of gradient-descent techniques and leads to surface evolutions that place the contour generators of the surface at their correct locations in the images, without the need for additional silhouette or apparent-contour constraints.
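    As a hedged illustration of the Bayesian structure described above, the toy sketch below combines a synthetic data term (standing in for photoconsistency or reprojection costs) with a smoothness prior over a 1D depth profile, and computes the MAP labelling by exact min-sum message passing on a chain. None of the thesis's actual models (depth maps, occupancy grids, evolving surfaces) or its visibility-aware gradient appear here; sizes, costs and parameters are illustrative assumptions.

```python
# Minimal sketch: MAP estimate of a 1D depth profile as data term + smoothness prior,
# solved exactly by min-sum message passing on a chain. Purely illustrative.
import numpy as np

rng = np.random.default_rng(1)
n_pixels, n_labels = 64, 32
true_depth = (16 + 8 * np.sin(np.linspace(0, np.pi, n_pixels))).astype(int)

# Synthetic data costs: low at the true depth, noisy elsewhere.
data = rng.uniform(0.5, 1.5, size=(n_pixels, n_labels))
data[np.arange(n_pixels), true_depth] = rng.uniform(0.0, 0.2, size=n_pixels)

lam = 0.3
labels = np.arange(n_labels)
pairwise = lam * np.abs(labels[:, None] - labels[None, :])   # smoothness prior

# Forward and backward min-sum messages along the chain.
fwd = np.zeros((n_pixels, n_labels))
bwd = np.zeros((n_pixels, n_labels))
for i in range(1, n_pixels):
    fwd[i] = np.min(fwd[i - 1][:, None] + data[i - 1][:, None] + pairwise, axis=0)
for i in range(n_pixels - 2, -1, -1):
    bwd[i] = np.min(bwd[i + 1][:, None] + data[i + 1][:, None] + pairwise.T, axis=0)

# Per-pixel min-marginals and the MAP depth profile.
depth_map = np.argmin(data + fwd + bwd, axis=1)
print("mean abs error vs. true depth:", np.mean(np.abs(depth_map - true_depth)))
```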

    A narrow band method for the convex formulation of discrete multi-label problems

    No full text
    We study a narrow-band type algorithm to solve a discrete formulation of the convex relaxation of energy functionals with total variation regularization and nonconvex data terms. We prove that this algorithm converges to a local minimum of the original nonlinear optimization problem. We illustrate the algorithm with experiments on disparity computation in stereo and on a multi-label segmentation problem, and we check experimentally that the energy of the local minimum is very close to the energy of the global minimum obtained without the narrow-band method.
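    The narrow-band idea can be illustrated, under toy assumptions, on a 1D multi-label problem with a nonconvex data term and a total-variation regularizer: restrict the labels to a small band around the current solution, solve the banded subproblem, re-centre the band, and repeat until nothing changes. In the sketch below the banded subproblem is solved by scanline dynamic programming rather than by the convex relaxation studied in the paper; the data and parameters are synthetic assumptions. As in the paper's experiments, the energy of the narrow-band local minimum is compared against the global minimum over the full label set.

```python
# Minimal sketch: a narrow-band outer loop for a 1D multi-label problem with a
# TV regularizer. The banded subproblem is solved here by exact dynamic programming,
# standing in for the convex-relaxation solver of the paper. Purely illustrative.
import numpy as np

rng = np.random.default_rng(5)
n, n_labels, lam = 80, 64, 0.3

# Nonconvex synthetic data term: cheap at a piecewise-constant "true" labelling,
# with a few spurious zero-cost outlier labels.
true_labels = np.concatenate([np.full(40, 20), np.full(40, 45)])
data = rng.uniform(0.5, 1.5, size=(n, n_labels))
data[np.arange(n), true_labels] = rng.uniform(0.0, 0.2, size=n)
outliers = rng.random(n) < 0.25
data[outliers, rng.integers(0, n_labels, size=outliers.sum())] = 0.0

def energy(labels):
    return data[np.arange(n), labels].sum() + lam * np.abs(np.diff(labels)).sum()

def solve_in_band(centers, r):
    """Exact min-sum DP on the chain, restricted to labels within +-r of `centers`."""
    bands = np.clip(centers[:, None] + np.arange(-r, r + 1)[None, :], 0, n_labels - 1)
    cost = np.take_along_axis(data, bands, axis=1)
    best = cost[0].copy()
    back = np.zeros((n, bands.shape[1]), dtype=int)
    for i in range(1, n):
        pair = lam * np.abs(bands[i - 1][:, None] - bands[i][None, :])
        total = best[:, None] + pair
        back[i] = np.argmin(total, axis=0)
        best = np.min(total, axis=0) + cost[i]
    idx = int(np.argmin(best))
    out = np.empty(n, dtype=int)
    for i in range(n - 1, -1, -1):           # backtrack the optimal banded labelling
        out[i] = bands[i, idx]
        idx = back[i, idx]
    return out

# Narrow-band loop: re-centre a small band on the current solution until it is stable.
labels = np.argmin(data, axis=1)             # initialisation from the data term alone
for _ in range(50):
    new = solve_in_band(labels, r=3)
    if np.array_equal(new, labels):
        break
    labels = new

global_opt = solve_in_band(np.full(n, n_labels // 2), r=n_labels)  # band covers all labels
print("narrow-band energy:", round(energy(labels), 3),
      " global energy:", round(energy(global_opt), 3))
```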

    Polyconvexification of the multi-label optical flow problem

    No full text
    In this paper, the problem of optical flow and occlusion mask estimation is addressed. To that end, we consider a multi-label representation of the optical flow and define an energy that models the problem. The convexification of this energy and its minimization with an iterative algorithm are studied. Our algorithm is implemented on the GPU, since each pixel can be processed in parallel. In our experiments, the trade-off between the quality of the results and the computing time is very promising.
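    As a hedged illustration of the multi-label setting and of the per-pixel parallelism mentioned above, and not of the paper's convexification or occlusion-mask estimation, the sketch below builds data costs over integer displacement candidates for a synthetic image pair and runs a few Jacobi-style sweeps in which every pixel updates its label simultaneously; the images, label range and weights are made-up assumptions.

```python
# Minimal sketch: discrete multi-label optical flow on a toy image pair, with
# winner-take-all initialisation and parallel (Jacobi) per-pixel label updates
# against a quadratic coupling to the neighbours' flows. Purely illustrative.
import numpy as np

rng = np.random.default_rng(2)

H, W = 48, 48
true_flow = np.array([2, 1])                            # (row, col) displacement
big = rng.uniform(0.0, 1.0, size=(H + 8, W + 8))        # shared underlying texture
img1 = big[4:4 + H, 4:4 + W]
img2 = big[4 - true_flow[0]:4 - true_flow[0] + H,
           4 - true_flow[1]:4 - true_flow[1] + W] + 0.05 * rng.normal(size=(H, W))

# Label set: all integer displacements in [-3, 3]^2, with per-pixel data costs.
disp = np.array([(dr, dc) for dr in range(-3, 4) for dc in range(-3, 4)])
r0, r1, c0, c1 = 3, H - 3, 3, W - 3                     # interior valid for every label
data = np.empty((r1 - r0, c1 - c0, len(disp)))
for k, (dr, dc) in enumerate(disp):
    data[:, :, k] = (img1[r0:r1, c0:c1] - img2[r0 + dr:r1 + dr, c0 + dc:c1 + dc]) ** 2

labels = np.argmin(data, axis=2)                        # winner-take-all start

lam = 0.05
for _ in range(10):
    # Jacobi sweep: every pixel re-chooses its label against the neighbours' flows
    # from the previous sweep (constant terms dropped; np.roll wraps at the borders,
    # which is harmless for this constant-flow toy).
    fr, fc = disp[labels, 0], disp[labels, 1]
    sum_r = sum(np.roll(fr, s, axis=a) for a in (0, 1) for s in (-1, 1))
    sum_c = sum(np.roll(fc, s, axis=a) for a in (0, 1) for s in (-1, 1))
    smooth = lam * (4 * (disp[:, 0] ** 2 + disp[:, 1] ** 2)
                    - 2 * disp[:, 0] * sum_r[:, :, None]
                    - 2 * disp[:, 1] * sum_c[:, :, None])
    labels = np.argmin(data + smooth, axis=2)

print("fraction of pixels assigned the true flow:",
      np.mean(np.all(disp[labels] == true_flow, axis=-1)))
```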

    Stereoscopic Image Inpainting using scene geometry

    Get PDF
    In this paper we propose an algorithm for stereoscopic image inpainting, given the inpainting masks in both images. We also assume that the depth map is known in one of the images of the stereo pair, taken as the reference. This image is clustered into homogeneous color regions using a mean-shift procedure. In each clustered region, the depths are fitted by a plane and then extended into the mask. We then inpaint the visible parts of each extended region using a modified exemplar-based inpainting algorithm. Finally, we extend the algorithm to the full stereoscopic pair. We present experiments showing the performance of the proposed algorithm.
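    One step of the pipeline, fitting a plane to a region's known depths and extending it into the mask, can be sketched in a few lines under toy assumptions; the mean-shift clustering and the exemplar-based color inpainting are not shown, and the synthetic depth map, mask and region are made up for the example.

```python
# Minimal sketch: least-squares fit of a plane z = a*x + b*y + c to the known depths
# of one region, then evaluation of the plane inside the inpainting mask to extend
# the region's depth. Purely illustrative data.
import numpy as np

rng = np.random.default_rng(3)

H, W = 40, 60
a_true, b_true, c_true = 0.02, -0.01, 3.0              # ground-truth plane parameters
ys, xs = np.mgrid[0:H, 0:W]
depth_true = a_true * xs + b_true * ys + c_true
depth = depth_true + 0.01 * rng.normal(size=(H, W))    # observed noisy depth map

# Inpainting mask (unknown depths) and the region the pixels were assigned to
# (a single region here, standing in for one mean-shift cluster).
mask = np.zeros((H, W), dtype=bool)
mask[10:25, 20:45] = True
region = np.ones((H, W), dtype=bool)

# Least-squares plane fit on the region's known depths.
known = region & ~mask
A = np.column_stack([xs[known], ys[known], np.ones(known.sum())])
(a, b, c), *_ = np.linalg.lstsq(A, depth[known], rcond=None)

# Extend the fitted plane into the masked part of the region.
fill = mask & region
filled = depth.copy()
filled[fill] = a * xs[fill] + b * ys[fill] + c

print("fitted plane:", (round(float(a), 4), round(float(b), 4), round(float(c), 3)))
print("max abs depth error inside the mask:",
      float(np.abs(filled[fill] - depth_true[fill]).max()))
```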

    Coherent Background Video Inpainting through Kalman Smoothing along Trajectories

    Get PDF
    Video inpainting consists in recovering the missing or corrupted parts of an image sequence so that the reconstructed sequence looks natural. For each frame, the reconstruction has to be spatially coherent with the rest of the image and temporally coherent with the reconstructions of adjacent frames. Many methods have been proposed in recent years. Most of them focus only on inpainting foreground objects moving with a periodic motion and assume that the background is almost static. In this paper we address the problem of background inpainting and propose a method that handles dynamic backgrounds (illumination changes, moving camera, dynamic textures, etc.). The algorithm starts by applying an image inpainting technique to each frame of the sequence and then temporally smooths these reconstructions through Kalman smoothing along the estimated trajectories of the unknown points. The computation of the trajectories relies on the estimation of forward and backward dense optical flow fields. Several experiments and comparisons demonstrate the performance of the proposed approach.
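    The temporal step, smoothing per-point values along their trajectories, can be illustrated with a standard scalar Kalman filter followed by a Rauch-Tung-Striebel smoother, as in the hedged sketch below; the random-walk state model, the noise levels and the synthetic trajectory are assumptions for the example, and the optical-flow-based trajectory estimation and per-frame inpainting are not reproduced.

```python
# Minimal sketch: Kalman filtering + Rauch-Tung-Striebel smoothing of one scalar
# trajectory (e.g. the intensity of a background point tracked over time), assuming
# a random-walk state model. Purely illustrative signal and noise levels.
import numpy as np

rng = np.random.default_rng(4)

T = 100
true = 0.5 + 0.2 * np.sin(np.linspace(0, 2 * np.pi, T))   # slow illumination change
obs = true + 0.05 * rng.normal(size=T)                    # noisy per-frame estimates

q, r = 1e-4, 0.05 ** 2       # process (random-walk) and observation noise variances

# Forward Kalman filter.
m = np.empty(T)
p = np.empty(T)
m[0], p[0] = obs[0], r
for t in range(1, T):
    m_pred, p_pred = m[t - 1], p[t - 1] + q               # random-walk prediction
    k = p_pred / (p_pred + r)                             # Kalman gain
    m[t] = m_pred + k * (obs[t] - m_pred)
    p[t] = (1 - k) * p_pred

# Backward Rauch-Tung-Striebel smoother.
ms, ps = m.copy(), p.copy()
for t in range(T - 2, -1, -1):
    g = p[t] / (p[t] + q)                                 # smoother gain
    ms[t] = m[t] + g * (ms[t + 1] - m[t])
    ps[t] = p[t] + g ** 2 * (ps[t + 1] - (p[t] + q))

print("rmse raw:     ", np.sqrt(np.mean((obs - true) ** 2)))
print("rmse smoothed:", np.sqrt(np.mean((ms - true) ** 2)))
```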